7 research outputs found

    Press-n-Paste: Copy-and-Paste Operations with Pressure-sensitive Caret Navigation for Miniaturized Surface in Mobile Augmented Reality

    Get PDF
    Publisher Copyright: © 2021 ACM. Copy-and-paste operations are among the most popular features on computing devices such as desktop computers, smartphones and tablets. However, copy-and-paste operations are not sufficiently supported on Augmented Reality (AR) smartglasses designed for real-time interaction with text in physical environments. This paper proposes two system solutions, namely Granularity Scrolling (GS) and Two Ends (TE), for copy-and-paste operations on AR smartglasses. By leveraging a thumb-size button on a touch-sensitive and pressure-sensitive surface, both multi-step solutions capture the target text through indirect manipulation and subsequently enable copy-and-paste operations. Based on these solutions, we implemented an experimental prototype named Press-n-Paste (PnP). In an eight-session evaluation capturing 1,296 copy-and-paste operations, 18 participants using GS and TE achieved peak performance of 17,574 ms and 13,951 ms per copy-and-paste operation, with accuracy rates of 93.21% and 98.15% respectively, which is on par with commercial solutions using direct manipulation on touchscreen devices. The user footprints also show that PnP features a distinctively miniaturized interaction area within 12.65 mm × 14.48 mm. PnP not only proves the feasibility of copy-and-paste operations with the flexibility of various granularities on AR smartglasses, but also has significant implications for the design space of pressure widgets and for input design on smart wearables. Peer reviewed.
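
    As a concrete illustration of the pressure-to-granularity idea behind Granularity Scrolling, the following Python sketch maps a normalized pressure reading from a thumb-size button to a text-selection granularity. The thresholds and granularity levels are illustrative assumptions; the abstract does not publish the system's actual mapping.

        # Hypothetical sketch of the Granularity Scrolling idea: a pressure
        # reading from a thumb-size button selects the granularity at which
        # the caret expands a selection. The even split of the pressure range
        # is an assumption, not the paper's published mapping.

        GRANULARITIES = ["character", "word", "sentence", "paragraph"]

        def granularity_for_pressure(pressure: float) -> str:
            """Map a normalized pressure value in [0, 1] to a selection granularity."""
            # Evenly split the pressure range across the available granularities.
            index = min(int(pressure * len(GRANULARITIES)), len(GRANULARITIES) - 1)
            return GRANULARITIES[index]

        if __name__ == "__main__":
            for p in (0.1, 0.3, 0.6, 0.9):
                print(f"pressure={p:.1f} -> select by {granularity_for_pressure(p)}")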

    From seen to unseen: Designing keyboard-less interfaces for text entry on the constrained screen real estate of Augmented Reality headsets

    Get PDF
    Text input is a very challenging task on the constrained screen real estate of Augmented Reality headsets. Typical keyboards spread over multiple lines and occupy a significant portion of the screen. In this article, we explore the feasibility of single-line text entry systems for smartglasses. We first design FITE, a dynamic keyboard where the characters are positioned depending on their probability within the current input. However, the dynamic layout leads to mediocre text input speed and low accuracy. We then introduce HIBEY, a fixed one-line solution that further decreases screen real estate usage by hiding the layout. Despite its hidden layout, HIBEY surprisingly performs much better than FITE, achieving a mean text entry rate of 9.95 words per minute (WPM) with 96.06% accuracy, which is comparable to other state-of-the-art approaches. After 8 days, participants achieve an average of 13.19 WPM. In addition, HIBEY occupies only 13.14% of the screen real estate at the edge region, which is 62.80% smaller than the default keyboard layout on Microsoft HoloLens. Peer reviewed.
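
    The following hypothetical Python sketch illustrates FITE's core idea of ordering a single-line keyboard by the probability of each next character given the current input. The toy word list stands in for a real language model, which the abstract does not specify.

        # Illustrative sketch of probability-ordered key layout: rank candidate
        # next characters by frequency among words matching the typed prefix.
        # VOCAB is a toy stand-in for the paper's (unspecified) language model.
        from collections import Counter

        VOCAB = ["the", "this", "that", "they", "there", "hello", "help", "hold"]

        def next_char_ranking(prefix: str) -> list:
            """Rank candidate next characters by frequency among matching words."""
            counts = Counter(
                word[len(prefix)]
                for word in VOCAB
                if word.startswith(prefix) and len(word) > len(prefix)
            )
            ranked = [ch for ch, _ in counts.most_common()]
            # Append the remaining alphabet so every character stays reachable.
            ranked += [c for c in "abcdefghijklmnopqrstuvwxyz" if c not in ranked]
            return ranked

        if __name__ == "__main__":
            # After typing "th", likely continuations ("e", "i", "a") lead the line.
            print(next_char_ranking("th")[:5])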

    AICP: Augmented Informative Cooperative Perception

    Get PDF
    Connected vehicles, whether equipped with advanced driver-assistance systems or fully autonomous, require human driver supervision and are currently constrained to visual information in their line of sight. A cooperative perception system among vehicles increases their situational awareness by extending their perception range. Existing solutions focus on improving perspective transformation and fast information collection. However, such solutions fail to filter out large amounts of less relevant data and thus impose significant network and computation load. Moreover, presenting all this less relevant data can overwhelm the driver and thus actually hinder them. To address these issues, we present Augmented Informative Cooperative Perception (AICP), the first fast-filtering system that optimizes the informativeness of shared data at vehicles to improve the fused presentation. To this end, we formulate an informativeness maximization problem for vehicles to select a subset of data to display to their drivers. Specifically, we propose (i) a dedicated system design with a custom data structure and a lightweight routing protocol for convenient data encapsulation, fast interpretation and transmission, and (ii) a comprehensive problem formulation and an efficient fitness-based sorting algorithm to select the most valuable data to display at the application layer. We implement a proof-of-concept prototype of AICP with a bandwidth-hungry, latency-constrained real-life augmented reality application. The prototype adds only 12.6 milliseconds of latency to an informativeness-unaware system. Finally, we test the networking performance of AICP at scale and show that AICP effectively filters out less relevant packets and decreases the channel busy time. Peer reviewed.
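
    As an illustration of the fitness-based sorting step, the following Python sketch scores shared perception items and displays only the highest-fitness subset under a budget. The fitness function, data fields and budget are assumptions made for illustration; the paper's actual formulation is more comprehensive.

        # Hedged sketch of fitness-based selection: score each received item,
        # sort by fitness, and display only the top items under a budget.
        # The fitness function (weighting proximity and recency) is invented.
        from dataclasses import dataclass

        @dataclass
        class PerceptionItem:
            source_id: str
            distance_m: float   # distance from the ego vehicle
            age_s: float        # time since the item was sensed

        def fitness(item: PerceptionItem) -> float:
            # Closer and fresher items are assumed more informative to the driver.
            return 1.0 / (1.0 + item.distance_m) + 1.0 / (1.0 + item.age_s)

        def select_to_display(items, budget: int):
            """Return the `budget` highest-fitness items, filtering out the rest."""
            return sorted(items, key=fitness, reverse=True)[:budget]

        if __name__ == "__main__":
            items = [
                PerceptionItem("car-A", distance_m=15.0, age_s=0.1),
                PerceptionItem("car-B", distance_m=120.0, age_s=0.5),
                PerceptionItem("car-C", distance_m=40.0, age_s=2.0),
            ]
            for it in select_to_display(items, budget=2):
                print(it.source_id)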

    One-thumb text acquisition on force-assisted miniature interfaces for mobile headsets

    No full text
    Touchscreen interfaces are shrinking and even disappearing on mobile headsets. The existing approaches for text acquisition on mobile headsets, for instance speech commands and hand gestures, are cumbersome and coarse. In this paper, we show the feasibility of interaction on a miniature area as small as 12 × 13 mm², which offers an input alternative on small form-factor devices such as smartwatches, smart rings, or the spectacle frames of mobile headsets. To this end, we propose and implement two interaction approaches, namely FRS and DupleFR, for acquiring textual content on mobile headsets. Both approaches leverage force-assisted interaction on a miniature-size interface. They enable the user to acquire textual content at various granularities such as characters, words, sentences, paragraphs, and the entire text. After 8 sessions, 22 participants using FRS and DupleFR achieved peak performance of 11.455 and 10.611 seconds per text acquisition respectively, with accuracy rates of 91.41% and 94.95%. Although FRS and DupleFR are at a disadvantage as indirect manipulations, they are at least 37.06% faster than the commercial standards designed for direct manipulation on touchscreens.
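
    A minimal Python sketch of force-assisted input on a miniature surface follows, assuming a single force threshold that splits thumb presses into light and deep events driving two different actions. The threshold value and event semantics are assumptions; the abstract does not define FRS or DupleFR at this level of detail.

        # Hypothetical force-assisted interaction: light presses move the
        # caret at the current granularity, deep presses cycle the granularity.
        # The threshold is an illustrative value, not taken from the paper.

        DEEP_PRESS_THRESHOLD = 0.6  # normalized force in [0, 1]

        def classify_press(force: float) -> str:
            return "deep" if force >= DEEP_PRESS_THRESHOLD else "light"

        class GranularityCycler:
            """Light presses advance the caret; deep presses cycle granularity."""
            LEVELS = ["character", "word", "sentence", "paragraph", "full text"]

            def __init__(self):
                self.level = 0

            def on_press(self, force: float) -> str:
                if classify_press(force) == "deep":
                    self.level = (self.level + 1) % len(self.LEVELS)
                    return f"granularity -> {self.LEVELS[self.level]}"
                return f"advance caret by one {self.LEVELS[self.level]}"

        if __name__ == "__main__":
            cycler = GranularityCycler()
            for force in (0.2, 0.8, 0.3, 0.9, 0.1):
                print(cycler.on_press(force))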

    How subtle can it get? A trimodal study of ring-sized interfaces for one-handed drone control

    No full text
    Flying drones have become common objects in our daily lives, serving a multitude of purposes. Many of these purposes involve outdoor scenarios where the user combines drone control with another activity. Traditional interaction methods rely on physical or virtual joysticks that occupy both hands, thus restricting drone usability. In this paper, we investigate one-handed human-to-drone interaction by leveraging three modalities: force, touch, and IMU. After prototyping three different combinations of these modalities on a smartphone, we evaluate them against the current commercial standard through two user experiments. These experiments help us find the combination of modalities that strikes a compromise between user performance, perceived task load, wrist rotation, and interaction area size. Accordingly, we select the method that achieves 16.54% faster task completion times than the two-handed commercial baseline while supporting subtle user behaviours, and implement it within a small ring-form device. A final experiment involving 12 participants shows that, thanks to its small size and weight, the ring device performs better than the same method implemented on a mobile phone. Furthermore, users unanimously found the device useful for controlling a drone in mobile scenarios (AVG = 3.92/5), easy to use (AVG = 3.58/5) and easy to learn (AVG = 3.58/5). Our findings give significant design clues in the search for subtle and effective drone control through finger augmentation devices. With our prototypical system and a multi-modal on-finger device, users can control a drone with subtle wrist rotations (pitch gestures: 43.24° amplitude; roll gestures: 46.35° amplitude) and unnoticeable thumb presses within a miniature-sized area of 1.08 × 0.61 cm².
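
    The following Python sketch shows one plausible control law for mapping wrist IMU angles to drone velocity commands, in the spirit of the trimodal design. The dead zone and gain values are invented for illustration; the abstract reports gesture amplitudes but not a control law.

        # Hypothetical wrist-angle-to-velocity mapping with a dead zone to
        # ignore unintentional motion. All constants are illustrative.
        import math

        DEAD_ZONE_DEG = 10.0   # ignore small unintentional wrist motion
        MAX_ANGLE_DEG = 45.0   # wrist angle mapped to a full-speed command
        MAX_SPEED_MPS = 2.0

        def angle_to_speed(angle_deg: float) -> float:
            """Map a wrist angle to a signed speed, with a dead zone."""
            magnitude = abs(angle_deg)
            if magnitude < DEAD_ZONE_DEG:
                return 0.0
            scale = min((magnitude - DEAD_ZONE_DEG) / (MAX_ANGLE_DEG - DEAD_ZONE_DEG), 1.0)
            return math.copysign(scale * MAX_SPEED_MPS, angle_deg)

        if __name__ == "__main__":
            # Wrist pitch drives forward/backward, roll drives left/right.
            for pitch, roll in [(5.0, 0.0), (30.0, -20.0), (45.0, 46.35)]:
                print(f"pitch={pitch:6.2f} deg -> vx={angle_to_speed(pitch):+.2f} m/s, "
                      f"roll={roll:6.2f} deg -> vy={angle_to_speed(roll):+.2f} m/s")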

    Screenshots, symbols, and personal thoughts: the role of Instagram for social activism

    No full text
    In this paper, we highlight the use of Instagram for social activism, taking the 2019 Hong Kong protests as a case study. Instagram focuses on image content and provides users with few features to share or repost, limiting information propagation. Nevertheless, users who are politically active offline also share their activism on Instagram. We first evaluate the effect of protests on social media activity for protesters and non-protesters over two significant protests. Protesters' exposure to protest-related posts is much higher than that of non-protesters, and their network activity follows the protest schedule. They are also much more active on posts related to the protest they participate in than on posts related to the other protest. We then analyze the images posted by the users. Users predominantly use symbols related to the protests and share personal thoughts on their primary actors. Users primarily share content to raise their network's awareness, and the choice of content is directly affected by Instagram's intrinsic interaction modalities.

    MyoKey: surface electromyography and inertial motion sensing-based text entry in AR

    No full text
    Seamless text input in Augmented Reality (AR) is very challenging yet essential for enabling user-friendly AR applications. Existing approaches such as speech input and vision-based gesture recognition suffer from environmental obstacles and the large default keyboard size, which sacrifices the majority of the screen's real estate in AR. In this paper, we propose MyoKey, a system that enables users to effectively and unobtrusively input text in a constrained AR environment by jointly leveraging surface Electromyography (sEMG) and Inertial Motion Unit (IMU) signals transmitted by wearable sensors on a user's forearm. MyoKey adopts a deep learning-based classifier to infer hand gestures from sEMG. To show the feasibility of our approach, we implement a mobile AR application using the Unity application building framework. We present novel interaction and system designs that incorporate information on hand gestures from sEMG and arm motions from the IMU to provide a seamless text entry solution. We demonstrate the applicability of MyoKey through a series of experiments, achieving an accuracy of 0.91 in identifying five gestures in real time (inference time: 97.43 ms).
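
    A hedged Python sketch of the interaction concept follows: IMU-sensed arm motion selects a key group, and one of five sEMG-classified hand gestures picks a key within it. The group layout, gesture names and classifier stub are assumptions made for illustration, not MyoKey's actual design.

        # Illustrative sEMG + IMU text entry: arm yaw picks a key group, a
        # recognized gesture (one of five classes) picks a key inside it.
        # classify_semg is a stub standing in for the trained deep classifier.

        KEY_GROUPS = [list("abcde"), list("fghij"), list("klmno"),
                      list("pqrst"), list("uvwxy")]
        GESTURES = ["fist", "pinch", "spread", "flex", "extend"]  # five classes

        def classify_semg(window) -> str:
            """Stand-in for a deep-learning classifier over an sEMG window."""
            # A real system would run a trained network here; we fake a prediction.
            return GESTURES[int(sum(window)) % len(GESTURES)]

        def select_key(imu_yaw_deg: float, semg_window) -> str:
            # Quantize arm yaw into one of the key groups...
            group = KEY_GROUPS[min(int(abs(imu_yaw_deg) // 15), len(KEY_GROUPS) - 1)]
            # ...then let the recognized gesture index a key inside that group.
            gesture = classify_semg(semg_window)
            return group[GESTURES.index(gesture)]

        if __name__ == "__main__":
            print(select_key(imu_yaw_deg=35.0, semg_window=[0.2, 0.9, 0.9]))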